
CCP FoxFour
C C P Alliance
Posted - 2015.11.09 13:58:04
[1]
Pete Butcher wrote:It would be very nice if some dev actually responded, and even nicer if someone took a look at the problem. Today even a 100 r/s limit is too much for CREST, and the server closes connections all the time.
Care to give some more details? A traceroute, times of the problems, your IP or user agent so I can look into it, anything to actually help track it down?
@CCP_FoxFour // Technical Designer // Team Size Matters
Third-party developer? Check out the official developers site for dev blogs, resources, and more.
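For context, the client-side requests-per-second cap discussed throughout this thread can be sketched as a token bucket. This is an illustrative Python sketch, not code from Evernus or any CREST library; the 100 r/s rate mirrors the limit Pete later reports as mostly workable.

```python
import threading
import time

class RateLimiter:
    """Token bucket capping client-side requests per second.

    Illustrative sketch only; RateLimiter is a hypothetical helper,
    not part of Evernus or any CREST SDK.
    """

    def __init__(self, rate):
        self.rate = float(rate)          # tokens regenerated per second
        self.tokens = float(rate)        # start with a full bucket
        self.updated = time.monotonic()
        self.lock = threading.Lock()

    def acquire(self):
        # Block until a token is available, then consume it.
        while True:
            with self.lock:
                now = time.monotonic()
                self.tokens = min(self.rate,
                                  self.tokens + (now - self.updated) * self.rate)
                self.updated = now
                if self.tokens >= 1:
                    self.tokens -= 1
                    return
                wait = (1 - self.tokens) / self.rate
            time.sleep(wait)

# Pace calls at the 100 r/s level reported as mostly stable.
limiter = RateLimiter(100)
```

Calling `limiter.acquire()` before each request keeps the sustained rate at or below the configured limit, regardless of how many worker threads share the limiter.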

CCP FoxFour
C C P Alliance
Posted - 2015.11.09 14:40:27
[2]
Pete Butcher wrote:
Traceroute:

Tracing route to public-crest.eveonline.com [87.237.38.221] over a maximum of 30 hops:

  1    <1 ms    <1 ms    <1 ms  192.168.0.1
  2    14 ms     9 ms    11 ms  84-10-168-1.static.chello.pl [84.10.168.1]
  3    12 ms    14 ms     9 ms  89-75-13-65.infra.chello.pl [89.75.13.65]
  4    10 ms    14 ms    11 ms  pl-waw04a-rc1-ae17-2114.aorta.net [84.116.252.57]
  5     9 ms    11 ms    14 ms  pl-waw02a-ri1-ae1-0.aorta.net [84.116.138.90]
  6     9 ms    13 ms    11 ms  pni-pl-waw05a-as1299-telia.aorta.net [213.46.178.50]
  7    37 ms    38 ms    35 ms  hbg-bb1-link.telia.net [80.91.251.35]
  8    40 ms    38 ms    41 ms  ldn-bb3-link.telia.net [62.115.142.125]
  9    37 ms    38 ms    39 ms  ldn-b3-link.telia.net [80.91.251.165]
 10    eveonline-ic-138015-ldn-b3.c.telia.net [213.248.83.198] reports: Destination net unreachable.

Trace complete.

It seems like one of the nodes is down at the moment, but the problem has persisted pretty much the whole time since I started testing public CREST about a week ago. User agent: Evernus 1.36. I'm experimenting with different r/s limits: 100 seems to work most of the time, while 150 causes the remote host to close connections in 100% of cases. Everything in between is a lottery. I also found it might be linked to the number of requests sent: fewer than 1k requests seem to be handled properly most of the time, but making ~30k requests (even with a <150 r/s limit) causes errors.
At least looking at that traceroute, you're not even making it to our hardware. That last hop that was unreachable is the last node in the Telia network before ours.
Something to consider trying is making your requests through a proxy outside the Telia network.
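The proxy suggestion above could look like the following with Python's standard library. The proxy address is a hypothetical placeholder, and `fetch_via_proxy` is an illustrative helper, not part of any CREST tooling.

```python
import urllib.request

# Hypothetical proxy endpoint; substitute one hosted outside the Telia route.
PROXY = "http://proxy.example.com:3128"

# Route both HTTP and HTTPS traffic through the proxy.
proxy_handler = urllib.request.ProxyHandler({"http": PROXY, "https": PROXY})
opener = urllib.request.build_opener(proxy_handler)

def fetch_via_proxy(url):
    """GET `url` through the configured proxy, with an identifying user agent."""
    request = urllib.request.Request(
        url, headers={"User-Agent": "example-crest-client"})
    with opener.open(request, timeout=30) as response:
        return response.read()

# fetch_via_proxy("https://public-crest.eveonline.com/")  # would route via PROXY
```

If requests succeed through a proxy outside Telia but fail directly, that points at the peering path rather than the CREST servers themselves.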

CCP FoxFour
C C P Alliance
Posted - 2015.11.09 15:16:30
[3]
Do you by any chance mean "Evernus 1.35"? I don't see any "Evernus 1.36".

CCP FoxFour
C C P Alliance
Posted - 2015.11.09 16:58:44
[4]
First, thanks for the continued pestering on this issue, Pete. It has always sort of been a problem we were aware of, but I will admit no action has really been taken to try and sort it out directly. We have done some other things that we thought might help improve it, but nothing has really worked.
Over the next few weeks I am going to be looking at digging into this, adding some telemetry to the CREST code, and just trying to figure out where the slowness is coming from.
I don't know when or if we will see improvements come from this, but I wanted to let you know we are looking.

CCP FoxFour
C C P Alliance
Posted - 2015.11.19 14:45:50
[5]
Pete Butcher wrote:
Kazuno Ozuwara wrote:
But bigger responses tend to be very slow (looking at the market types endpoint: requesting 13 huge pages concurrently takes ~20 s, and ~6 s when cached). Also, using filters is not good for server-side caching. Maybe market data per market group would be optimal.
It's not a problem when you do it right. Keep a cache per type per region, which is most likely done anyway, then just take the union of the result sets and send it to the user. I don't imagine it being a big deal. Note that such an approach will always be faster than making thousands upon thousands of requests to fetch the exact same data, both on the server and the client side. Somebody just has to make it. Right now, everybody loses:
- users - CREST is both slow and unreliable, leading to both developer and end-user frustration
- CCP - the servers are being hammered by thousands/millions of unnecessary requests
The performance of our backend servers seems to be... well, fine. We're trying to figure out why things take so long as we speak. If things were fast, which we admit they are not, we wouldn't mind the large number of requests.
@CCP_FoxFour // Technical Designer // Team Tech Co
Third-party developer? Check out the official developers site for dev blogs, resources, and more.
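The per-(region, type) cache union Pete describes can be sketched in a few lines. Everything here is hypothetical for illustration: `fetch_orders` stands in for the real backend query, and `cached_orders` and `region_market` are not CREST endpoints.

```python
from functools import lru_cache

def fetch_orders(region_id, type_id):
    """Stand-in for the real per-region, per-type backend query (hypothetical)."""
    return [{"region": region_id, "type": type_id, "price": 100.0}]

@lru_cache(maxsize=None)
def cached_orders(region_id, type_id):
    # One cached result set per (region, type) pair; return a tuple so
    # cached entries cannot be mutated by callers.
    return tuple(fetch_orders(region_id, type_id))

def region_market(region_id, type_ids):
    """Union of the per-type cached result sets for one region."""
    orders = []
    for type_id in type_ids:
        orders.extend(cached_orders(region_id, type_id))
    return orders
```

Serving one unioned response built from per-type caches replaces the thousands of individual type requests a client would otherwise make, which is the trade-off the quoted post is arguing for.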

CCP FoxFour
C C P Alliance
Posted - 2015.11.21 12:28:02
[6]
Out of curiosity, how many connections are most people opening?

CCP FoxFour
C C P Alliance
Posted - 2015.11.21 21:22:08
[7]
Iam Widdershins wrote:
I WAS opening about as many as I needed to get the response rate up near the limit, up to 150 actually. I had 150 threads running through two timed semaphores to give a 3 ms minimum spacing (not sure if this is necessary, but it seems polite) and to keep the rate at 150 requests per any given second. If everything went spectacularly well, this could open one connection per thread. Since I am accessing several endpoints with varying rate limits, I've added a third semaphore that puts a hard limit on the number of concurrent requests in flight. Even with 60 connections opened at once, or indeed anything more than about 20, the error rate begins to skyrocket; at the old limit of 150 I was receiving JSON failure messages on around 15-30% of my calls.
https://eveonline-third-party-documentation.readthedocs.org/en/latest/crest/intro/
Check down the bottom: the section on rate limiting says the maximum number of concurrent connections is 20. :) Anything more than 20 and our NGINX proxy is just going to start saying **** you. I assume it tosses 503s, but I'm not 100% sure.
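The documented cap of 20 concurrent connections suggests gating every request through a single client-side semaphore. A minimal sketch in Python, assuming the caller supplies its own HTTP `fetch` callable (the callable is an assumption of this sketch, not a CREST library function):

```python
import threading

# Documented CREST cap on concurrent connections; exceeding it risks 503s.
MAX_CONCURRENT = 20

connection_slots = threading.BoundedSemaphore(MAX_CONCURRENT)

def crest_get(url, fetch):
    """Run `fetch(url)` while holding one of the 20 connection slots.

    `fetch` is any callable performing the actual HTTP GET; every worker
    thread should funnel its requests through this one gate.
    """
    with connection_slots:
        return fetch(url)
```

With this gate in place, the thread count can stay high for throughput while the number of sockets open at any instant never exceeds the proxy's limit, which matches the behavior Iam Widdershins observed: error rates climb as soon as concurrency passes about 20.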